A Theoretical Analysis of NDCG Ranking Measures

Authors

  • Yining Wang
  • Liwei Wang
  • Yuanzhi Li
  • Wei Chen
Abstract

A central problem in ranking is to design a measure for evaluating ranking functions. In this paper we study, from a theoretical perspective, the Normalized Discounted Cumulative Gain (NDCG), a family of ranking measures widely used in practice. Although there are extensive empirical studies of the NDCG family, little is known about its theoretical properties. We first show that, whatever the ranking function is, the standard NDCG, which adopts a logarithmic discount, converges to 1 as the number of items to rank goes to infinity. At first sight, this result seems to imply that the standard NDCG cannot differentiate between good and bad ranking functions on large datasets, contradicting its empirical success in many applications. To gain a deeper understanding of general NDCG ranking measures, we propose a notion referred to as consistent distinguishability. This notion captures the intuition that a ranking measure should have the following property: for every pair of substantially different ranking functions, the measure can decide which one is better in a consistent manner on almost all datasets. We show that the standard NDCG has consistent distinguishability even though it converges to the same limit for all ranking functions. We next characterize the set of feasible discount functions for NDCG according to the concept of consistent distinguishability. Specifically, we show that whether an NDCG measure has consistent distinguishability depends on how fast the discount decays, and that a decay rate of 1/r is the critical point. We then turn to the cut-off version of NDCG, i.e., NDCG@k. We analyze the distinguishability of NDCG@k for various choices of k and of the discount function. Experimental results on real Web search datasets agree well with the theory.
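As a concrete illustration of the quantities discussed above, the following sketch computes DCG, NDCG, and the cut-off variant NDCG@k for an arbitrary discount function, contrasting the standard logarithmic discount with a faster polynomial decay. This is a minimal sketch for clarity only: the exponential gain 2^rel - 1, the example relevance labels, and all function names are assumptions, not code or definitions taken verbatim from the paper.

    import math

    def dcg(relevances, discount, k=None):
        """Discounted cumulative gain of relevance grades listed in ranked order."""
        if k is not None:
            relevances = relevances[:k]
        # gain (2^rel - 1) weighted by the discount D(r) at rank r (1-based)
        return sum((2 ** rel - 1) * discount(r)
                   for r, rel in enumerate(relevances, start=1))

    def ndcg(relevances, discount, k=None):
        """NDCG: DCG of the given order divided by the DCG of the ideal order."""
        ideal = dcg(sorted(relevances, reverse=True), discount, k)
        return dcg(relevances, discount, k) / ideal if ideal > 0 else 0.0

    # Standard logarithmic discount versus a faster, polynomially decaying one.
    log_discount = lambda r: 1.0 / math.log2(r + 1)
    poly_discount = lambda r: 1.0 / r ** 2

    ranked_labels = [3, 2, 3, 0, 1, 2]  # hypothetical labels in ranked order
    print(ndcg(ranked_labels, log_discount))        # standard NDCG
    print(ndcg(ranked_labels, poly_discount, k=5))  # NDCG@5 with faster decay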

Similar Articles

A Theoretical Analysis of NDCG Type Ranking Measures

A central problem in ranking is to design a ranking measure for evaluation of ranking functions. In this paper we study, from a theoretical perspective, the widely used Normalized Discounted Cumulative Gain (NDCG)-type ranking measures. Although there are extensive empirical studies of NDCG, little is known about its theoretical properties. We first show that, whatever the ranking function is, ...

A Theoretical Analysis of Normalized Discounted Cumulative Gain (NDCG) Ranking Measures

A central problem in ranking is to design a measure for evaluation of ranking functions. In this paper we study, from a theoretical perspective, the Normalized Discounted Cumulative Gain (NDCG) which is a family of ranking measures widely used in practice. Although there are extensive empirical studies of NDCG, little is known about its theoretical properties. We first show that, whatever the r...

On NDCG Consistency of Listwise Ranking Methods

We study the consistency of listwise ranking methods with respect to the popular Normalized Discounted Cumulative Gain (NDCG) criterion. State of the art listwise approaches replace NDCG with a surrogate loss that is easier to optimize. We characterize NDCG consistency of surrogate losses to discover a surprising fact: several commonly used surrogates are NDCG inconsistent. We then show how to ...
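To make the notion of a surrogate loss concrete, here is a hedged sketch of one well-known listwise surrogate: a ListNet-style softmax cross entropy between the top-one distributions induced by the labels and by the model scores. It is a generic illustration of replacing NDCG with a smoother objective, not one of the specific surrogates analyzed in the paper above.

    import math

    def softmax(xs):
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]
        s = sum(exps)
        return [e / s for e in exps]

    def listnet_loss(scores, relevances):
        """Cross entropy between label-induced and score-induced top-one
        probabilities (lower is better)."""
        p_true = softmax(relevances)
        p_pred = softmax(scores)
        return -sum(pt * math.log(pp) for pt, pp in zip(p_true, p_pred))

    # A model whose scores roughly agree with the labels incurs a smaller
    # surrogate loss than one that ranks a low-relevance item first.
    labels = [3.0, 1.0, 0.0]
    print(listnet_loss([2.5, 1.0, 0.2], labels))  # better ordering, lower loss
    print(listnet_loss([0.2, 1.0, 2.5], labels))  # reversed ordering, higher loss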

Learning to Rank by Optimizing NDCG Measure

Learning to rank is a relatively new field of study, aiming to learn a ranking function from a set of training data with relevancy labels. The ranking algorithms are often evaluated using information retrieval measures, such as Normalized Discounted Cumulative Gain (NDCG) [1] and Mean Average Precision (MAP) [2]. Until recently, most learning to rank algorithms were not using a loss function re...
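Since the snippet above mentions MAP alongside NDCG as a standard evaluation measure, the short sketch below computes Mean Average Precision for binary relevance judgments; the toy queries are made-up examples, not data from the paper.

    def average_precision(relevances):
        """Average of precision@r over the ranks r that hold a relevant item."""
        hits, precisions = 0, []
        for r, rel in enumerate(relevances, start=1):
            if rel:
                hits += 1
                precisions.append(hits / r)
        return sum(precisions) / len(precisions) if precisions else 0.0

    def mean_average_precision(rankings):
        """MAP: average-precision values averaged over queries."""
        return sum(average_precision(r) for r in rankings) / len(rankings)

    # Two toy queries with binary relevance judgments in ranked order.
    print(mean_average_precision([[1, 0, 1, 0], [0, 1, 1]]))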

A Unified View of Loss Functions in Learning to Rank

This paper provides a unified view of loss functions used in learning to rank. Loss function is a key component in learning to rank, because it encodes human knowledge on evaluation of ranking and guides the process of learning. Many loss functions have been proposed in the literature of learning to rank, with different forms and different motivations, and have been exploited in the development...

Journal:

Volume   Issue

Pages   -

Year of publication: 2013